Results 1 - 20 of 2,926
1.
PLoS One ; 19(4): e0298479, 2024.
Article in English | MEDLINE | ID: mdl-38625906

ABSTRACT

OBJECTIVES: (i) To identify peer-reviewed publications reporting the mental and/or physical health outcomes of Deaf adults who are sign language users and to synthesise the evidence; (ii) if data are available, to analyse how the health of the adult Deaf population compares to that of the general population; (iii) to evaluate the quality of evidence in the identified publications; (iv) to identify limitations of the current evidence base and suggest directions for future research. DESIGN: Systematic review. DATA SOURCES: Medline, Embase, PsycINFO, and Web of Science. ELIGIBILITY CRITERIA FOR SELECTING STUDIES: The inclusion criteria were Deaf adult populations who used a signed language and all study types, including methods-focused papers that also contain results relating to the health outcomes of Deaf signing populations. Full-text articles published in peer-reviewed journals in English or a signed language such as ASL (American Sign Language) were searched up to 13th June 2023. DATA EXTRACTION: Supported by the Rayyan systematic review software, two authors independently reviewed identified publications at each screening stage (primary and secondary). A third reviewer was consulted to settle any disagreements. Comprehensive data extraction included research design, study sample, methodology, findings, and a quality assessment. RESULTS: Of the 35 included studies, the majority (25 out of 35) concerned mental health outcomes. The findings from this review highlight the inequalities in health and mental health outcomes for Deaf signing populations in comparison with the general population, gaps in the range of conditions studied in relation to Deaf people, and the poor quality of available data. CONCLUSIONS: Population sample definition and consistency of standards of reporting of health outcomes for Deaf people who use sign language should be improved. Further research on health outcomes not previously reported is needed to gain a better understanding of Deaf people's state of health.


Subject(s)
Outcome Assessment, Health Care , Sign Language , Adult , Humans
2.
PLoS One ; 19(4): e0298699, 2024.
Article in English | MEDLINE | ID: mdl-38574042

ABSTRACT

Sign language recognition presents significant challenges due to the intricate nature of hand gestures and the necessity to capture fine-grained details. In response to these challenges, a novel approach is proposed: the Lightweight Attentive VGG16 with Random Forest (LAVRF) model. LAVRF introduces a refined adaptation of the VGG16 model integrated with attention modules, complemented by a Random Forest classifier. By streamlining the VGG16 architecture, the Lightweight Attentive VGG16 effectively manages complexity while incorporating attention mechanisms that dynamically concentrate on pertinent regions within input images, resulting in enhanced representation learning. Leveraging the Random Forest classifier provides notable benefits, including proficient handling of high-dimensional feature representations, reduction of variance and overfitting concerns, and resilience against noisy and incomplete data. Additionally, the model performance is further optimized through hyperparameter optimization, utilizing Optuna in conjunction with hill climbing, which efficiently explores the hyperparameter space to discover optimal configurations. The proposed LAVRF model demonstrates outstanding accuracy on three datasets, achieving remarkable results of 99.98%, 99.90%, and 100% on the American Sign Language, American Sign Language with Digits, and NUS Hand Posture datasets, respectively.
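As a rough illustration of the kind of pipeline this abstract describes, the sketch below pairs a VGG16 backbone with a squeeze-and-excitation-style attention block and feeds the pooled features to a Random Forest. This is a minimal sketch only: the attention design, layer sizes, and dummy data are assumptions, not the authors' LAVRF configuration.

```python
# Hypothetical LAVRF-style pipeline: attention-augmented VGG16 features
# classified by a Random Forest. Sizes and data are illustrative only.
import numpy as np
from tensorflow.keras import layers, Model
from tensorflow.keras.applications import VGG16
from sklearn.ensemble import RandomForestClassifier

def build_attentive_vgg16(input_shape=(64, 64, 3)):
    base = VGG16(include_top=False, weights=None, input_shape=input_shape)
    x = base.output
    # Squeeze-and-excitation-style channel attention (one plausible choice).
    se = layers.GlobalAveragePooling2D()(x)
    se = layers.Dense(x.shape[-1] // 16, activation="relu")(se)
    se = layers.Dense(x.shape[-1], activation="sigmoid")(se)
    x = layers.Multiply()([x, layers.Reshape((1, 1, x.shape[-1]))(se)])
    x = layers.GlobalAveragePooling2D()(x)
    return Model(base.input, x)

rng = np.random.default_rng(0)
train_images = rng.random((100, 64, 64, 3)).astype("float32")  # dummy images
train_labels = rng.integers(0, 24, 100)                        # dummy labels

extractor = build_attentive_vgg16()
features = extractor.predict(train_images, verbose=0)
clf = RandomForestClassifier(n_estimators=300, random_state=0)
clf.fit(features, train_labels)
```

In a setup like this, the deep network is used purely as a feature extractor and the Random Forest handles the final decision, which is one way to get the variance-reduction benefits the abstract mentions.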


Subject(s)
Random Forest , Sign Language , Humans , Pattern Recognition, Automated/methods , Gestures , Upper Extremity
4.
Sensors (Basel) ; 24(5)2024 Feb 24.
Article in English | MEDLINE | ID: mdl-38475008

ABSTRACT

Sign language serves as the primary mode of communication for the deaf community. With technological advancements, it is crucial to develop systems capable of enhancing communication between deaf and hearing individuals. This paper reviews recent state-of-the-art methods in sign language recognition, translation, and production. Additionally, we introduce a rule-based system, called ruLSE, for generating synthetic datasets in Spanish Sign Language. To check the usefulness of these datasets, we conduct experiments with two state-of-the-art models based on Transformers, MarianMT and Transformer-STMC. In general, we observe that the former achieves better results (+3.7 points in the BLEU-4 metric) although the latter is up to four times faster. Furthermore, the use of pre-trained word embeddings in Spanish enhances results. The rule-based system demonstrates superior performance and efficiency compared to Transformer models in Sign Language Production tasks. Lastly, we contribute to the state of the art by releasing the generated synthetic dataset in Spanish named synLSE.
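Since BLEU-4 is the comparison metric here, a minimal example of computing it with NLTK may be useful; the gloss sequences below are invented placeholders, not drawn from synLSE.

```python
# BLEU-4 with NLTK; the gloss sequences are invented, not from synLSE.
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["CASA", "IX", "GRANDE"]]   # hypothetical LSE gloss reference
hypothesis = ["CASA", "GRANDE"]          # hypothetical system output
score = sentence_bleu(reference, hypothesis,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU-4: {score:.3f}")
```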


Subject(s)
Deep Learning , Humans , Sign Language , Hearing , Communication
6.
BMJ ; 384: 2615, 2024 02 28.
Article in English | MEDLINE | ID: mdl-38418094

Subject(s)
Deafness , Sign Language , Humans
8.
Sensors (Basel) ; 24(3)2024 Jan 26.
Article in English | MEDLINE | ID: mdl-38339542

ABSTRACT

Japanese Sign Language (JSL) is vital for communication in Japan's deaf and hard-of-hearing community. However, likely because of the large number of patterns (46 types) and the mixture of static and dynamic gestures among them, the dynamic gestures have been excluded in most studies. Few researchers have worked on dynamic JSL alphabet recognition, and the accuracy achieved so far is unsatisfactory. We propose a dynamic JSL recognition system that uses effective feature extraction and feature selection approaches to overcome these challenges. The procedure combines hand pose estimation, effective feature extraction, and machine learning techniques. We collected a video dataset capturing JSL gestures through standard RGB cameras and employed MediaPipe for hand pose estimation. Four types of features were proposed. The significance of these features is that the same feature generation method can be used regardless of the number of frames or whether the gestures are dynamic or static. We employed a Random Forest (RF) based feature selection approach to select the most informative features. Finally, we fed the reduced features into a kernel-based Support Vector Machine (SVM) classifier. Evaluations conducted on our proprietary, newly created dynamic Japanese sign language alphabet dataset and on the LSA64 dynamic dataset yielded recognition accuracies of 97.20% and 98.40%, respectively. This approach not only addresses the complexities of JSL but also holds the potential to bridge communication gaps, offering effective communication for the deaf and hard-of-hearing, with broader implications for sign language recognition systems globally.
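A condensed sketch of the pipeline shape described above (MediaPipe hand landmarks, RF-based feature selection, then a kernel SVM) might look like the following. The feature design and placeholder data are assumptions, not the paper's four proposed feature types.

```python
# Sketch of a landmarks -> RF feature selection -> kernel SVM pipeline;
# the aggregated feature vectors and labels below are placeholders.
import cv2
import mediapipe as mp
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import SelectFromModel
from sklearn.svm import SVC

hands = mp.solutions.hands.Hands(static_image_mode=False, max_num_hands=1)

def landmarks_from_frame(frame_bgr):
    """Return a flat (63,) array of x, y, z hand landmarks, or None."""
    result = hands.process(cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB))
    if not result.multi_hand_landmarks:
        return None
    lm = result.multi_hand_landmarks[0].landmark
    return np.array([[p.x, p.y, p.z] for p in lm]).ravel()

# X: per-video feature vectors aggregated over frames; y: gesture labels.
rng = np.random.default_rng(0)
X, y = rng.random((200, 63)), rng.integers(0, 46, 200)   # placeholder data

selector = SelectFromModel(RandomForestClassifier(n_estimators=200, random_state=0))
X_sel = selector.fit_transform(X, y)
clf = SVC(kernel="rbf").fit(X_sel, y)
```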


Subject(s)
Pattern Recognition, Automated , Sign Language , Humans , Japan , Pattern Recognition, Automated/methods , Hand , Algorithms , Gestures
9.
Science ; 383(6682): 519-523, 2024 Feb 02.
Article in English | MEDLINE | ID: mdl-38301028

ABSTRACT

Sign languages are naturally occurring languages. As such, their emergence and spread reflect the histories of their communities. However, limitations in historical recordkeeping and linguistic documentation have hindered the diachronic analysis of sign languages. In this work, we used computational phylogenetic methods to study family structure among 19 sign languages from deaf communities worldwide. We used phonologically coded lexical data from contemporary languages to infer relatedness and suggest that these methods can help study regular form changes in sign languages. The inferred trees are consistent in key respects with known historical information but challenge certain assumed groupings and surpass analyses made available by traditional methods. Moreover, the phylogenetic inferences are not reducible to geographic distribution but do affirm the importance of geopolitical forces in the histories of human languages.
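As a toy illustration of inferring relatedness from phonologically coded lexical data, the sketch below clusters invented binary feature codes with UPGMA over Hamming distances. This is a simple stand-in for the paper's computational phylogenetic methods, not a reproduction of them, and the language codes are fabricated.

```python
# Toy stand-in for phylogenetic inference: UPGMA clustering over Hamming
# distances between invented binary phonological codes for six languages.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import dendrogram, linkage

rng = np.random.default_rng(0)
codes = rng.integers(0, 2, size=(6, 40))    # 6 languages x 40 coded features
labels = ["ASL", "BSL", "LSF", "DGS", "JSL", "LSE"]

tree = linkage(pdist(codes, metric="hamming"), method="average")  # UPGMA
order = dendrogram(tree, labels=labels, no_plot=True)["ivl"]
print("inferred grouping order:", order)
```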


Subject(s)
Language , Linguistics , Sign Language , Humans , Language/history , Linguistics/classification , Linguistics/history , Phylogeny
10.
J Biomech ; 165: 112011, 2024 Mar.
Article in English | MEDLINE | ID: mdl-38382174

ABSTRACT

Prior studies suggest that native (born to at least one deaf or signing parent) and non-native signers have different musculoskeletal health outcomes from signing, but the individual and combined biomechanical factors driving these differences are not fully understood. Such group differences in signing may be explained by the five biomechanical factors of American Sign Language that have been previously identified: ballistic signing, hand and wrist deviations, work envelope, muscle tension, and "micro" rests. Prior work used motion capture and surface electromyography to collect joint kinematics and muscle activations, respectively, from ten native and thirteen non-native signers as they signed for 7.5 min. Each factor was individually compared between groups. A factor analysis was used to determine the relative contributions of each biomechanical factor between signing groups. No significant differences were found between groups for ballistic signing, hand and wrist deviations, work envelope volume, excursions from recommended work envelope, muscle tension, or "micro" rests. Factor analysis revealed that "micro" rests had the strongest contribution for both groups, while hand and wrist deviations had the weakest contribution. Muscle tension and work envelope had stronger contributions for native compared to non-native signers, while ballistic signing had a stronger contribution for non-native compared to native signers. Using a factor analysis enabled discernment of relative contributions of biomechanical variables across native and non-native signers that could not be detected through isolated analysis of individual measures. Differences in the contributions of these factors may help explain the differences in signing across native and non-native signers.
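For readers unfamiliar with the technique, a factor analysis over measures like the five named above can be run in a few lines with scikit-learn; the data matrix here is synthetic, and the two-factor choice is an assumption rather than the study's model.

```python
# Illustrative factor analysis over the five biomechanical measures;
# the 23-signer data matrix is synthetic, not the study's recordings.
import numpy as np
from sklearn.decomposition import FactorAnalysis

measures = ["ballistic", "deviations", "envelope", "tension", "micro_rests"]
rng = np.random.default_rng(0)
X = rng.normal(size=(23, 5))          # 23 signers x 5 measures (placeholder)

fa = FactorAnalysis(n_components=2, random_state=0).fit(X)
for name, loadings in zip(measures, fa.components_.T):
    print(f"{name:12s} loadings: {np.round(loadings, 2)}")
```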


Subject(s)
Hand , Sign Language , Humans , United States , Upper Extremity , Wrist , Factor Analysis, Statistical
11.
Sci Rep ; 14(1): 1043, 2024 01 10.
Article in English | MEDLINE | ID: mdl-38200108

ABSTRACT

The impact of adverse listening conditions on spoken language perception is well established, but the role of suboptimal viewing conditions in signed language processing is less clear. Viewing angle, i.e. the physical orientation of a perceiver relative to a signer, varies in many everyday deaf community settings for L1 signers and may impact comprehension. Further, processing signs from various viewing angles may be more difficult for late L2 learners of a signed language, who encounter less variation in sign input while learning. Using a semantic decision task in a distance priming paradigm, we show that British Sign Language signers are slower and less accurate in comprehending signs shown from side viewing angles, with L2 learners in particular making disproportionately more errors when viewing signs from side angles. We also investigated how individual differences in mental rotation ability modulate the processing of signs from different angles. Speed and accuracy on the BSL task correlated with mental rotation ability, suggesting that signers may mentally represent signs from a frontal view and use mental rotation to process signs from other viewing angles. Our results extend the literature on viewpoint specificity in visual recognition to linguistic stimuli. The data suggest that L2 signed language learners should maximise their exposure to diverse signed language input, both in terms of viewing angle and of other difficult viewing conditions, to maximise comprehension.


Subject(s)
Learning , Sign Language , Humans , Individuality , Linguistics , Physical Examination
12.
Sensors (Basel) ; 24(2)2024 Jan 11.
Article in English | MEDLINE | ID: mdl-38257544

ABSTRACT

Sign language is a natural communication method used to convey messages within the deaf community. In the study of sign language recognition through wearable sensors, data sources are limited and the data acquisition process is complex. This research aims to collect an American Sign Language dataset with a wearable inertial motion capture system and to realize the recognition and end-to-end translation of sign language sentences with deep learning models. In this work, a dataset consisting of 300 commonly used sentences was gathered from 3 volunteers. The recognition network consists mainly of three layers: a convolutional neural network, bi-directional long short-term memory, and connectionist temporal classification. The model achieves accuracy rates of 99.07% in word-level evaluation and 97.34% in sentence-level evaluation. The translation network is an encoder-decoder structured model based mainly on long short-term memory with global attention. The word error rate of end-to-end translation is 16.63%. The proposed method has the potential to recognize more sign language sentences with reliable inertial data from the device.
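The recognition stack described here (convolutional front end, bi-directional LSTM, CTC loss) can be sketched compactly in PyTorch. All dimensions below, including the input channel count and vocabulary size, are placeholders rather than the paper's settings.

```python
# Rough sketch of a Conv -> BiLSTM -> CTC recognition stack; dimensions
# are placeholders, not the paper's configuration.
import torch
import torch.nn as nn

class SignCTCNet(nn.Module):
    def __init__(self, n_channels=66, n_classes=300, hidden=128):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden, hidden, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(2 * hidden, n_classes + 1)   # +1 for CTC blank

    def forward(self, x):                  # x: (batch, time, channels)
        x = self.conv(x.transpose(1, 2)).transpose(1, 2)
        x, _ = self.lstm(x)
        return self.fc(x).log_softmax(-1)  # (batch, time, classes + 1)

model = SignCTCNet()
ctc = nn.CTCLoss(blank=300)               # blank index = n_classes
logp = model(torch.randn(4, 120, 66)).transpose(0, 1)   # (T, N, C) for CTCLoss
loss = ctc(logp, torch.randint(1, 300, (4, 10)),
           torch.full((4,), 120), torch.full((4,), 10))
```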


Subject(s)
Sign Language , Wearable Electronic Devices , Humans , United States , Motion Capture , Neurons , Perception
13.
J Deaf Stud Deaf Educ ; 29(2): 170-186, 2024 Mar 17.
Article in English | MEDLINE | ID: mdl-38160399

ABSTRACT

Deaf patients who communicate in American Sign Language (ASL) experience communication challenges leading to medical errors, treatment delays, and health disparities. Research on Deaf patient communication preferences is sparse. Researchers conducted focus groups based on the Health Belief Model with culturally Deaf patients and interpreters. The ASL focus groups were interpreted and transcribed into written English, verified by a third-party interpreting agency, and uploaded into NVivo. Deductive coding was used to identify communication methods, and inductive coding was used to identify themes within each. Writing back-and-forth introduced challenges related to English proficiency, medical terminology, poor penmanship, and the tendency of providers to abbreviate. Participants had various speechreading abilities and described challenges with mask mandates. Multiple issues were identified with family and friends serving as proxy interpreters, including lack of training, confidentiality concerns, and implications for emotional support and patient autonomy. Video remote interpreting challenges included technical, environmental, and interpreter qualification concerns. Participants overwhelmingly preferred on-site interpreters for communication clarity. While there was a preference for direct care, many acknowledged this is not always feasible due to a lack of providers fluent in ASL. Access to on-site interpreters is vital for many Deaf patients to provide full access to critical medical information. Budget allocation for on-call interpreters is important in emergency settings.


Subject(s)
Deafness , Humans , Communication , Sign Language , Focus Groups , Health Personnel , Communication Barriers , Translating
14.
Int J Biol Macromol ; 258(Pt 2): 129068, 2024 Feb.
Article in English | MEDLINE | ID: mdl-38158069

ABSTRACT

Conductive hydrogels, a class of soft materials, have recently become promising candidates for flexible electronics applications. However, it remains difficult for conductive hydrogel-based strain sensors to unite large stretchability, high conductivity, self-healing, anti-freezing, anti-drying, and transparency in one material. Herein, a multifunctional conductive organohydrogel with all of the above properties is prepared by crosslinking polyacrylamide (PAM) with dialdehyde starch (DAS) in a glycerol-water binary solvent. Owing to the synergy of abundant hydrogen bonding and Schiff base interactions, introduced by the glycerol and the dialdehyde starch respectively, the organohydrogel achieves balanced mechanical and electrical properties. Moreover, the addition of glycerol promotes a water-locking effect, allowing the organohydrogel to retain its superior mechanical properties and conductivity even under extreme conditions. The resulting organohydrogel strain sensor exhibits desirable sensing performance with high sensitivity (GF = 6.07) over a wide strain range (0-697%), enabling the accurate monitoring of subtle body motions even at -30 °C. On this basis, a hand gesture monitoring system based on arrays of the organohydrogel sensors is constructed using machine learning, achieving a sign language recognition rate of 100% and thus facilitating communication between hearing- or speech-impaired people and the general population.
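The sensitivity figure quoted above is a gauge factor, GF = (ΔR/R0)/ε. A quick, illustrative computation shows the arithmetic involved; the resistance values below are invented, chosen only to land near the reported GF at the maximum reported strain.

```python
# Gauge factor: GF = (R - R0) / R0 / strain. Invented numbers chosen to
# land near the reported GF of ~6.07 at 697% strain.
def gauge_factor(r, r0, strain):
    """Relative resistance change divided by applied strain."""
    return (r - r0) / r0 / strain

print(round(gauge_factor(r=43.3, r0=1.0, strain=6.97), 2))  # -> 6.07
```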


Subject(s)
Glycerol , Sign Language , Starch/analogs & derivatives , Humans , Electric Conductivity , Hydrogels , Water
15.
Dev Sci ; 27(1): e13416, 2024 Jan.
Article in English | MEDLINE | ID: mdl-37255282

ABSTRACT

The hypothesis that impoverished language experience affects complex sentence structure development around the end of early childhood was tested using a fully randomized, sentence-to-picture matching study in American Sign Language (ASL). The participants were ASL signers who had impoverished or typical access to language in early childhood. Deaf signers whose access to language was highly impoverished in early childhood (N = 11) primarily comprehended structures consisting of a single verb and argument (Subject or Object), agreeing verbs, and the spatial relation or path of semantic classifiers. They showed difficulty comprehending more complex sentence structures involving dual lexical arguments or multiple verbs. As predicted, participants with typical language access in early childhood, deaf native signers (N = 17) or hearing second-language learners (N = 10), comprehended the range of 12 ASL sentence structures, independent of the subjective iconicity or frequency of the stimulus lexical items, or length of ASL experience and performance on non-verbal cognitive tasks. The results show that language experience in early childhood is necessary for the development of complex syntax. RESEARCH HIGHLIGHTS: Previous research with deaf signers suggests an inflection point around the end of early childhood for sentence structure development. Deaf signers who experienced impoverished language until the age of 9 or older comprehend several basic sentence structures but few complex structures. Language experience in early childhood is necessary for the development of complex sentence structure.


Subject(s)
Deafness , Language , Child, Preschool , Humans , Sign Language , Semantics , Hearing
16.
PLoS One ; 18(12): e0295398, 2023.
Article in English | MEDLINE | ID: mdl-38060609

ABSTRACT

Sign language (SL) has strong structural features. The variety of gestures and the complex trajectories of hand movements pose challenges for sign language recognition (SLR). Based on the inherent correlation between the gesture and the trajectory of an SL action, SLR is divided into gesture-based recognition and gesture-related movement-trajectory recognition. One hundred and twenty commonly used Chinese SL words, involving 9 gestures and 8 movement trajectories, are selected as research and test objects. A method based on the amplitude state of the surface electromyography (sEMG) signal and the acceleration signal is used for vocabulary segmentation. A multi-sensor decision-fusion method based on a coupled hidden Markov model is used to recognize SL vocabulary, with an average recognition rate of 90.41%. Experiments show that fusing sEMG signals with motion information is highly practical for SLR.
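The vocabulary segmentation step, thresholding the amplitude of an sEMG envelope, can be illustrated with a toy function; the threshold, minimum segment length, and synthetic signal are assumptions, not the paper's parameters.

```python
# Toy amplitude-threshold segmentation of an sEMG envelope; values invented.
import numpy as np

def segment_by_amplitude(envelope, threshold, min_len=10):
    """Return (start, end) index pairs where the envelope exceeds threshold."""
    active = envelope > threshold
    edges = np.flatnonzero(np.diff(active.astype(int)))
    bounds = np.r_[0, edges + 1, len(active)]
    return [(s, e) for s, e in zip(bounds[:-1], bounds[1:])
            if active[s] and e - s >= min_len]

t = np.linspace(0, 4, 400)
env = np.abs(np.sin(2 * np.pi * t)) * (t % 2 < 1)   # bursts of activity
print(segment_by_amplitude(env, threshold=0.3))
```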


Subject(s)
Pattern Recognition, Automated , Sign Language , Humans , Electromyography , Gestures , Hand , China , Algorithms
17.
Codas ; 35(6): e20220184, 2023.
Article in Spanish, English | MEDLINE | ID: mdl-38055413

ABSTRACT

PURPOSE: To obtain evidence of the reliability of a test evaluating the perception of minimal contrasts in Chilean Sign Language (LSCh). METHODS: Ten deaf children and adolescents aged between 7 and 14 years participated in this study. They were evaluated with the test of perception of minimal contrasts in LSCh. The test was reapplied between 11 and 14 days after the first application (test-retest reliability), and Spearman's Rho correlation was computed. During the first application, authorization was requested from the parents of the children and adolescents to record the participants' responses so that another evaluator could re-score the protocols, in order to obtain inter-rater reliability; Gwet's first-order agreement coefficient (AC1) was used for this analysis. RESULTS: Test-retest reliability showed a strong and significant correlation (Rho = 0.741; p = 0.014). The inter-rater concordance values varied between 0.962 and 1 (p < 0.001), indicating that the test presents almost perfect concordance. CONCLUSION: The minimal-pairs perception test in LSCh presents satisfactory test-retest and inter-rater reliability.
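Test-retest reliability via Spearman's Rho, as used in this study, takes one call in SciPy; the scores below are fabricated placeholders for ten participants, not the study's data.

```python
# Spearman's Rho for test-retest reliability; scores are fabricated.
from scipy.stats import spearmanr

test = [18, 22, 15, 30, 27, 19, 24, 21, 28, 16]
retest = [17, 23, 14, 29, 28, 18, 25, 20, 27, 15]
rho, p = spearmanr(test, retest)
print(f"Rho = {rho:.3f}, p = {p:.3f}")
```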


Subject(s)
Perception , Sign Language , Child , Adolescent , Humans , Reproducibility of Results , Chile
18.
Health Lit Res Pract ; 7(4): e215-e224, 2023 Oct.
Article in English | MEDLINE | ID: mdl-38061760

ABSTRACT

BACKGROUND: Previous studies showed that deaf and hard-of-hearing (DHH) individuals have low health literacy related to prescription labels. This study examined DHH individuals' experiences with understanding prescription labels and how technology can affect those experiences. OBJECTIVES: The purpose of this qualitative study was twofold: (1) to gain a more enhanced understanding of DHH experiences in understanding prescription labels, with a focus on language needs, expectations, and preferences; and (2) to assess the potential role of technology in addressing the communication-related accessibility issues that emerge from the data. METHODS: In this study, 25 Deaf American Sign Language users who had picked up a prescription from a pharmacy within the past year were interviewed. A thematic analysis, which included a systematic coding process, was used to uncover themes about their experiences picking up and using prescription medications. KEY RESULTS: Thematic analyses identified that medication-related experiences centered around four themes: (1) medication information seeking; (2) comfort taking medication; (3) picking up medication; and (4) communication with the pharmacy team. A large contributor to the communication experience was the perception that the pharmacist was not being respectful. Regarding comfort taking medications, 12% of participants expressed a lack of understanding of their medications while taking them, which led participants largely to use online resources when seeking medication information. This study also found that technology greatly aided the participants throughout these experiences. CONCLUSION: This study, which recorded these experiences within the context of limited health literacy and aversive audism, found that DHH individuals repeatedly encountered communication barriers, which may contribute to their poor medication literacy. Future studies should therefore explore how to leverage the potential benefits of technology to improve the pharmacy experience of DHH individuals, thereby improving medication literacy. [HLRP: Health Literacy Research and Practice. 2023;7(4):e215-e224.].


PLAIN LANGUAGE SUMMARY: Previous studies have shown that deaf and hard-of-hearing (DHH) individuals have low health literacy and higher rates of unintentional medication misuse. DHH participants described their experiences with the pharmacy and with technology as situated around negative attitudes and language barriers. Based on the four themes that emerged from our analysis, we identified areas where technology may help to reduce these care inequities.


Subject(s)
Hearing Loss , Persons With Hearing Impairments , Humans , United States , Sign Language , Language , Communication
19.
Article in English | MEDLINE | ID: mdl-38082804

ABSTRACT

Hand gesture classification is of high importance in any sign language recognition (SLR) system, which is expected to assist individuals with hearing and speech impairments. American Sign Language (ASL) comprises static and dynamic gestures representing many alphabets, phrases, and words. An ASL recognition system allows us to digitize communication and use it effectively within or outside the hearing-impaired community. Developing an ASL recognition system has been a challenge because some of the involved hand gestures closely resemble each other, so classifying them demands highly discriminative features. SLR through surface electromyography (sEMG) signals is computationally intensive, and using inertial measurement units (IMUs) or flex sensors for SLR occupies too much space on the user's hand. Video-based recognition systems restrict users by requiring them to make gestures or motions within the camera's field of view. A novel, precision-preserving static gesture classification system is proposed to fill this gap. The paper proposes a magnetometer-array-enabled static hand gesture classification system that offers an average accuracy of 98.60% for classifying alphabets and 94.07% for digits using a KNN classification model. The magnetometer-array-based wearable system is devised to minimize the electronics coverage around the hand and yet establish robust classification results that are useful for ASL recognition. The paper discusses the design of the proposed SLR system and also looks into optimizations that can be made to reduce the system's cost. Clinical relevance - The proposed novel magnetometer-array-based wearable system is cost-effective and works well across different hand sizes. It occupies a negligible amount of space on the user's hand and thus does not interfere with the user's everyday tasks. It is reliable, robust, and error-free for easy adoption towards building an ASL recognition system.
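The classification stage here is standard KNN. A minimal scikit-learn sketch on synthetic magnetometer-array readings is shown below, where the sensor count, feature layout, and choice of k are assumptions, not the paper's design.

```python
# Minimal KNN classification of synthetic magnetometer-array readings;
# sensor count, feature layout, and k are assumptions.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 15))        # e.g. 5 magnetometers x 3 axes
y = rng.integers(0, 26, 500)          # 26 static alphabet gestures

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
knn = KNeighborsClassifier(n_neighbors=5).fit(X_tr, y_tr)
print(f"accuracy: {knn.score(X_te, y_te):.3f}")
```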


Subject(s)
Gestures , Wearable Electronic Devices , Humans , Sign Language , Pattern Recognition, Automated/methods , Upper Extremity
20.
Sensors (Basel) ; 23(23)2023 Nov 23.
Article in English | MEDLINE | ID: mdl-38067738

ABSTRACT

This paper proposes, analyzes, and evaluates a deep learning architecture based on transformers for generating sign language motion from sign phonemes (represented using HamNoSys, a notation system developed at the University of Hamburg). The sign phonemes provide information about sign characteristics such as hand configuration, localization, or movements. The use of sign phonemes is crucial for generating sign motion with a high level of detail (including finger extensions and flexions). The transformer-based approach also includes a stop detection module for predicting the end of the generation process. Both aspects, motion generation and stop detection, are evaluated in detail. For motion generation, the dynamic time warping (DTW) distance is used to compute the similarity between two landmark sequences (ground truth and generated). The stop detection module is evaluated using detection accuracy and ROC (receiver operating characteristic) curves. The paper proposes and evaluates several strategies to obtain the best-performing system configuration, including different padding strategies, interpolation approaches, and data augmentation techniques. The best configuration of a fully automatic system obtains an average DTW distance per frame of 0.1057 and an area under the ROC curve (AUC) higher than 0.94.
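The evaluation metric, an average DTW distance per frame between landmark sequences, can be made concrete with a plain implementation. The per-frame normalization shown (dividing by the longer sequence length) is one plausible convention, not necessarily the paper's, and the sequences are random stand-ins.

```python
# Classic DTW between two landmark sequences, normalized per frame;
# sequences are random stand-ins for ground-truth and generated motion.
import numpy as np

def dtw_distance(a, b):
    """O(len(a)*len(b)) DTW over per-frame Euclidean costs."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

rng = np.random.default_rng(0)
ref, gen = rng.random((40, 10)), rng.random((46, 10))   # frames x landmark dims
print(dtw_distance(ref, gen) / max(len(ref), len(gen))) # per-frame distance
```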


Subject(s)
Algorithms , Sign Language , Humans , Motion , Movement , Hand